During a network partition, a banking app must choose Consistency over Availability to prevent data corruption and financial errors, while a Twitter 'Like' counter can choose Availability over Consistency because temporary inaccuracies are acceptable for user experience.
The choice between Availability and Consistency during a network partition depends entirely on the business domain and the consequences of each trade-off. For a banking app, the cost of inconsistency is catastrophic: duplicate transactions, incorrect balances, and financial losses. For a Twitter 'Like' counter, the cost of unavailability is user frustration, while temporary inconsistency is barely noticeable. The CAP theorem makes this trade-off explicit: when a partition occurs, a system must give up either consistency or availability, and different systems make opposite choices for good reason.
Why Consistency Wins: In banking, data correctness is paramount. If a network partition occurs, showing a user an incorrect balance or allowing a double-spend could have legal and financial consequences. The system must prioritize consistency even if that means rejecting transactions until the partition heals.
What Happens During Partition: The system becomes partially unavailable. Some users may see errors when trying to transfer money or check balances. However, any transaction that completes is guaranteed to be correct.
Real-World Example: When a bank's network partition occurs, ATM withdrawals might be limited or declined, and online banking might show a "service unavailable" message rather than risking displaying incorrect balances. This matches CP behavior—consistency is preserved at the cost of availability.
Write Concern Configuration: In MongoDB, a banking app would use { w: 'majority', j: true }. During a partition, if a majority of nodes cannot acknowledge a write, the write fails or times out rather than being acknowledged, so the system never confirms data that could later be rolled back on the minority side.
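The quorum rule behind w: 'majority' can be sketched in a few lines of Python. This is a simplified illustration of quorum acknowledgment, not the MongoDB driver API; accept_write and the cluster numbers are hypothetical:

```python
def accept_write(acks: int, cluster_size: int, w) -> bool:
    """Return True if a write can be acknowledged under write concern w.

    w == "majority" requires more than half the cluster to acknowledge;
    an integer w requires at least that many acknowledgments.
    """
    if w == "majority":
        return acks >= cluster_size // 2 + 1
    return acks >= w

# A 5-node cluster is split 2/3 by a partition; our side reaches only 2 nodes.
print(accept_write(2, 5, "majority"))  # False: reject rather than risk rollback
print(accept_write(3, 5, "majority"))  # True: a majority acknowledged the write
```

The key point: during the partition, the minority side cannot reach 3 of 5 nodes, so writes fail there, which is exactly the CP behavior the banking app wants.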
Why Availability Wins: Like counters are eventually consistent by nature. A user might see a slight discrepancy between the number of likes shown and the actual count for a few seconds, but this has no material impact. Users would rather the app work and show a slightly stale count than see an error or timeout.
What Happens During Partition: The system continues to accept and show likes, even if nodes can't immediately synchronize. When the partition resolves, counts reconcile. Users experience no interruption.
Real-World Example: During a network issue, you can still like a tweet, and the counter increments immediately in your view; the global count converges once replicas resynchronize. Even if the number shown differs slightly across users, the core functionality remains available. Twitter/X explicitly prioritizes availability for these non-critical counters.
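How counts reconcile after the partition heals can be sketched with a grow-only counter (G-Counter), a simple CRDT. This is a hypothetical illustration of the technique, not Twitter's actual implementation:

```python
def merge(a: dict, b: dict) -> dict:
    """Merge two replica states by taking the per-node maximum increment count."""
    return {node: max(a.get(node, 0), b.get(node, 0)) for node in a.keys() | b.keys()}

def value(state: dict) -> int:
    """Total likes = sum of every node's local increments."""
    return sum(state.values())

# Two replicas accept likes independently while partitioned.
replica_a = {"a": 3}  # node A recorded 3 likes during the partition
replica_b = {"b": 2}  # node B recorded 2 likes during the partition

# After the partition heals, merging in either order yields the same total.
merged = merge(replica_a, replica_b)
print(value(merged))  # 5
```

Because merge is commutative and idempotent, replicas can exchange states in any order and still converge, which is why no likes are lost even though no node was ever authoritative during the partition.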
Write Concern Configuration: A social media app might use { w: 1 } with no journal requirement, allowing writes to proceed as long as one node accepts them, maximizing availability during network issues.
Banking App Cost of Inconsistency: Regulatory fines, customer lawsuits, fraud, incorrect interest calculations, reconciliation failures. Severity: Critical.
Banking App Cost of Unavailability: Customer frustration, missed transactions, support calls. Severity: Significant but temporary.
Twitter Like Counter Cost of Inconsistency: Slightly inaccurate display for seconds/minutes. Severity: Minimal, users rarely notice.
Twitter Like Counter Cost of Unavailability: Inability to engage with content, degraded user experience, reduced engagement metrics. Severity: Moderate for user retention.
Decision Framework: If temporary inconsistency causes legal/financial harm, choose consistency. If unavailability harms user engagement more than temporary inaccuracy, choose availability.
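As a rough sketch, this framework can be encoded in a small helper function. The function name, parameters, and the default-to-CP fallback are assumptions for illustration, not part of any standard library:

```python
def partition_strategy(inconsistency_causes_harm: bool,
                       unavailability_hurts_more: bool) -> str:
    """Pick CP or AP for a workload during a network partition."""
    if inconsistency_causes_harm:
        return "CP"  # reject writes until the partition heals
    if unavailability_hurts_more:
        return "AP"  # keep serving, reconcile state later
    return "CP"      # when in doubt, default to safety

print(partition_strategy(True, False))   # banking ledger -> CP
print(partition_strategy(False, True))   # like counter   -> AP
```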
Modern systems often mix these trade-offs at different layers. A banking app might use CP for core ledger operations but AP for displaying transaction history with eventual consistency. Twitter uses CP for direct messages (you don't want messages lost) but AP for likes and follower counts. Understanding these trade-offs is critical to designing systems that meet business requirements while using the right tools for each workload.